 AAAI AI-Alert for Apr 11, 2017


Disney creating 'deformable', 'humanoid' robots to hug children

The Independent - Tech

A new patent application has revealed that Disney is looking into the development of robotic versions of its animated characters. The document describes "soft body" robots built specifically for "physical interaction with humans". It doesn't mention any specific characters, but the images alongside the filing show off a bulbous torso resembling that of Big Hero 6's Baymax. The entertainment firm's application repeatedly stresses the importance of safety. It says the robots would have a "rigid support element" and soft, deformable body parts that could potentially be filled with a gas or fluid.


New computer vision challenge wants to teach robots to see in 3D

New Scientist

Computer vision is ready for its next big test: seeing in 3D. The ImageNet Challenge, which has boosted the development of image-recognition algorithms, will be replaced by a new competition next year that aims to help robots see the world in all its depth. Since 2010, researchers have trained image recognition algorithms on the ImageNet database, a go-to set of more than 14 million images hand-labelled with information about the objects they depict. The algorithms learn to classify the objects in the photos into different categories, such as house, steak or Alsatian. Almost all computer vision systems are trained like this before being fine-tuned on a more specific set of images for different tasks.
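
As a rough illustration of the pretrain-then-fine-tune workflow the article describes (not something from the article itself), the sketch below loads a torchvision model pretrained on ImageNet and retrains only its final layer on a smaller, task-specific image set. The dataset path, class folders, and training settings are hypothetical placeholders, and the snippet assumes a recent PyTorch/torchvision install.

```python
# Minimal sketch: fine-tune an ImageNet-pretrained model on a new image set.
# The folder "path/to/task_images" is a hypothetical placeholder.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing: resize, crop, normalize with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Task-specific dataset organized as one subfolder per category.
dataset = datasets.ImageFolder("path/to/task_images", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a network already trained on ImageNet, and freeze its features.
model = models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for the new, smaller set of categories.
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

# Fine-tune only the new layer on the task-specific images.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9)

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```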


Consumers Confused About Artificial Intelligence

#artificialintelligence

Most consumers don't really know what artificial intelligence (AI) does, and that basic misunderstanding has left some fearful of the technology. A Pegasystems study released this week, based on a survey of 6,000 consumers in six countries, found that consumers are hesitant to embrace AI devices and services. Some 36% are comfortable engaging with businesses that use AI, even if it results in a better customer experience. About 72% said they have some sort of fear about AI, with 24% worried about robots taking over the world. Only 34% of respondents thought they had directly experienced AI, but when respondents were asked about the technologies in their lives, the survey found that 84% use at least one AI-powered service or device, such as virtual home assistants, intelligent chatbots, or predictive product suggestions.


Siri Update: Apple Working On Customizable, Voice-Specific Alternative For 'Hey Siri' Command

International Business Times

When using Siri, users can give the digital assistant commands to perform many tasks. However, the mechanism by which Google Assistant's rival responds to voice commands has proven to be flawed. For one thing, Siri is designed to respond to commands given by anyone, so people other than the owner of the device can ask the AI assistant to do things, including accessing personal data. Fortunately, Apple appears to be working on a solution already. A new patent application from the Cupertino giant contains details on how Samsung's biggest rival is planning to make Siri more secure.


Medable launches Cerebrum, a cloud-based machine learning platform for health apps - iMedicalApps

#artificialintelligence

Medable announced today the launch of Cerebrum, a new cloud-based machine learning tool for healthcare apps, including HealthKit-, ResearchKit-, and CareKit-compatible apps. In recent years, we've seen a number of healthcare-focused developers emerge that provide HIPAA-compliant health app development as well as cloud-based data management and analytics. We've covered some of Medable's work with a ResearchKit app focused on patients with LVADs, as well as a virtual care clinic. They also recently launched Axon, a do-it-yourself platform for developing ResearchKit apps. As health apps collect ever-increasing types and volumes of data on individuals, a core challenge is how to analyze that data and generate actionable insights that can improve patient care.


AI and Ingredients for Intelligence - DZone Big Data

#artificialintelligence

When I tell people that I work at an AI company, they often follow up with, "So, what kind of machine learning/deep learning do you do?" This isn't surprising, as most of the market attention (and hype) in and around AI has centered on machine learning and its high-profile subset, deep learning, and on natural language processing with the rise of chatbots and virtual assistants. But while machine learning is a core component of artificial intelligence, AI is, in fact, more than just ML. So, what does it really mean for an application to be "intelligent"? What does it take to create a system that is artificially intelligent?


Machine learning could help us tackle depression

#artificialintelligence

Depression is a simple-sounding condition with complex origins that aren't fully understood. Now, machine learning may enable scientists to unpick some of its mysteries in order to provide better treatment. For patients to be diagnosed with Major Depressive Disorder, which is thought to be the result of a blend of genetic, environmental, and psychological factors, they have to display several of a long list of symptoms, such as fatigue or lack of concentration. Once diagnosed, they may receive cognitive behavioral therapy or medication to help ease their condition. But not every treatment works for every patient, as symptoms can vary widely.


Neural Networks for Machine Learning: A Free Online Course

@machinelearnbot

The 78-video playlist above comes from a course called Neural Networks for Machine Learning, taught by Geoffrey Hinton, a computer science professor at the University of Toronto. The videos were created for a larger course taught on Coursera, which gets re-offered on a fairly regular basis. Neural Networks for Machine Learning will teach you about "artificial neural networks and how they're being used for machine learning, as applied to speech and object recognition, image segmentation, modeling language and human motion, etc." The course emphasizes "both the basic algorithms and the practical tricks needed to get them to work well." It's geared toward intermediate-level learners who are comfortable with calculus and have experience programming (in Python).


UK driverless vehicle tests begin in London

The Independent - Tech

Driverless pods have started carrying members of the public around North Greenwich, London, as part of the GATEway Project. The autonomous vehicles aren't fitted with a steering wheel or a brake pedal; instead, they use a collection of five cameras and three lasers to detect and avoid obstacles on a two-mile route near the O2. They can see up to 100m ahead and are capable of performing an emergency stop if necessary, though they have a top speed of just 10mph. The prototype pods being used in Greenwich can carry four passengers at a time, but each will have a trained person on board during the three-week trial.


The case for cloud-based AI -- GCN

#artificialintelligence

Meagan Metzger is the founder of Dcode42, an accelerator program for companies with innovative technology products for which there is a current or potential future government need. Dcode42 recently partnered with Amazon Web Services to help speed the adoption of artificial intelligence and machine learning for problem solving in government. GCN spoke with Metzger about the role of AI in government and ways cloud-based AI can help government solve challenges. The interview has been edited for length and clarity. GCN: What government challenges do you see AI solving?